Leave security to security code. Or: Stop fixing bugs to make your software secure!

If you read about operating system security, it seems to be all about how many holes are discovered and how quickly they are fixed. If you look inside an OS vendor, you see lots of code auditing taking place. This approach assumes that all security holes can be found and fixed, and that they can be eliminated more quickly than new ones are added. Stop fixing bugs already, and take security seriously!

Recently, German publisher heise.de interviewed Felix “fefe” von Leitner, a security expert and CCC spokesperson, on Mac OS X security:

heise.de: Apple has put protection mechanisms like Data Execution Prevention, Address Space Layout Randomization and Sandboxing into OS X. Shouldn’t that be enough?

Felix von Leitner: All these are mitigations that make exploiting the holes harder but not impossible. And: The underlying holes are still there! They have to close these holes, and not just make exploiting them harder. (Das sind alles Mitigations, die das Ausnutzen von Lücken schwerer, aber nicht unmöglich machen. Und: Die unterliegenden Lücken sind noch da! Genau die muss man schließen und nicht bloß das Ausnutzen schwieriger machen.)

Security mechanisms make certain bugs impossible to exploit

A lot is wrong with this statement. First of all, the term “harder” is used incorrectly in this context. Making “exploiting holes harder but not impossible” would mean that an attacker has to put more effort into writing exploit code for a certain security-relevant bug, but achieves the same result in the end. This is true in some special cases (with DEP on, certain code execution exploits can be converted into ROP exploits), but the whole point of mechanisms like DEP, ASLR and sandboxing is to make certain bugs impossible to exploit (e.g. directory traversal bugs can be made impossible with proper sandboxing), while other bugs are unaffected (DEP can’t help against trashing of globals through an integer overflow). So mechanisms like DEP, ASLR and sandboxing make it harder to find exploitable bugs, not harder to exploit existing bugs. In other words: every one of these mechanisms makes certain bugs non-exploitable, effectively decreasing the number of exploitable bugs in the system.
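
As a concrete illustration of the DEP case, here is a minimal sketch in plain C (POSIX; the single 0xC3 byte stands in for injected code, and the whole program is illustrative, not an exploit). With data pages mapped writable but not executable, an attacker can still copy code into a buffer, but the moment control flow is redirected into it the CPU faults, so the classic code-injection exploit class disappears instead of merely becoming harder:

    /* Sketch: why DEP/NX eliminates plain code injection (illustrative only). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* A writable but NOT executable page -- this is what DEP enforces
         * for stacks and heaps by default. */
        unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;

        /* An attacker may still be able to overflow data into the buffer ... */
        unsigned char injected[] = { 0xC3 };   /* stand-in for injected code */
        memcpy(buf, injected, sizeof injected);

        /* ... but redirecting control flow into it raises a fault instead
         * of executing attacker-controlled bytes. */
        void (*entry)(void) = (void (*)(void))buf;
        entry();                               /* faults (SIGSEGV) under DEP/NX */

        puts("never reached on a DEP-enforcing system");
        return 0;
    }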

As a consequence, it does not matter whether the underlying bug is still there: it cannot be exploited. Imagine you have an application in a sandbox that restricts all file system accesses to /tmp – is it a bug if the application doesn’t check all user filenames redundantly? Does the US President have to lock the bedroom door in the White House, or can he trust the building to be secure? Of course, a case can be made for multiple levels of barriers in high-security systems where a single breach can be disastrous and fixing a hole can be expensive (think: Xbox), but if you have to set priorities, it is smarter for the President to have security around the White House than to lock every door behind himself.

Symmetric and asymmetric security work

When an operating system company has to decide how to allocate its resources, it needs to be aware of symmetric and asymmetric work. Finding and fixing bugs is symmetric work: you are roughly as efficient at finding and fixing bugs as attackers are at finding and exploiting them. For every hour you spend fixing bugs, attackers have to spend about one more hour searching for them. Adding mechanisms like ASLR is asymmetric work. It may take you 1000 hours to implement, but over time it will waste far more than 1000 hours of your attackers’ time – or make attackers realize that the system is too much work and not worth attacking.

Leave security to security code

Divide your code into security code and non-security code. Security code needs to be written by people with a security background, who keep the design and implementation simple and maintainable and who are aware of common security pitfalls. Non-security code is code that never deals with security; it can be written by anyone. If a non-security project requires a small module that deals with security (e.g. one that verifies a login), push it into a different process – which is then security code, as sketched below.
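
A rough sketch of that process split on a Unix-like system, assuming a hypothetical helper at /usr/libexec/login-verify and an invented line-based pipe protocol: the application hands the credentials to a tiny, separately audited verifier process and only ever gets back a yes/no answer.

    /* Sketch: pushing login verification out of the application into a
     * separate process. Helper path and protocol are hypothetical. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Called from ordinary, non-security application code. It never sees
     * the password database and never parses trusted data itself. */
    int check_login(const char *user, const char *password) {
        int fd[2];
        if (pipe(fd) == -1)
            return 0;

        pid_t pid = fork();
        if (pid == -1)
            return 0;

        if (pid == 0) {                        /* child: the security code */
            dup2(fd[0], STDIN_FILENO);         /* credentials arrive on stdin */
            close(fd[0]);
            close(fd[1]);
            execl("/usr/libexec/login-verify", "login-verify", (char *)NULL);
            _exit(127);                        /* exec failed */
        }

        close(fd[0]);                          /* parent: send credentials */
        dprintf(fd[1], "%s\n%s\n", user, password);
        close(fd[1]);

        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) && WEXITSTATUS(status) == 0;   /* 0 = accepted */
    }

Only the verifier has to be written and audited by security people; the calling application remains plain non-security code.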

Imagine, for example, a small server application that just serves some data from your disk publicly. Attackers have exploited it to serve arbitrary files from the disk or to spread malware. Should you fix your application? Mind you, your application by itself has nothing to do with security. Why spend time adding a security infrastructure to it, fixing some of the holes, ignoring others, and adding new ones, instead of properly partitioning responsibilities and letting everyone do what they do best? The kernel can sandbox the server so that it can only access a single subdirectory and cannot write to the filesystem, and the server can stay simple instead of being bloated with security checks.
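
A minimal sketch of that kind of kernel-enforced confinement with classic Unix mechanisms (the directory /srv/public and the unprivileged uid/gid 65534 are placeholders): before the server touches any untrusted input, it locks itself into its data directory and drops root, so even an unchecked filename can no longer reach the rest of the disk.

    /* Sketch: confining a simple file server to one subtree before it
     * processes untrusted input. Paths and uid/gid are placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void enter_sandbox(void) {
        /* Make /srv/public the entire visible filesystem ... */
        if (chroot("/srv/public") == -1 || chdir("/") == -1) {
            perror("chroot");
            exit(1);
        }
        /* ... and give up root, so the process can neither undo the jail
         * nor write anywhere it shouldn't. */
        if (setgid(65534) == -1 || setuid(65534) == -1) {
            perror("drop privileges");
            exit(1);
        }
    }

    int main(void) {
        enter_sandbox();
        /* From here on, even a request for "../../etc/passwd" resolves
         * inside the sandbox: the traversal bug is no longer exploitable. */
        /* ... accept connections and serve files ... */
        return 0;
    }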

How many bugs in 1000 lines of code?

A lot of people seem to assume they can find and fix all bugs. But every non-trivial program contains at least one bug. The highly security-critical first-stage bootloader of the original Xbox was only 512 bytes in size and consisted of about 200 hand-written assembly instructions. It contained one bug in the design of the crypto system, as well as two code execution bugs, one of which could be exploited in two different ways. In the revised version, one of the code execution bugs was fixed, and the crypto system was replaced with one that had a different exploitable bug. Now extrapolate this to a 10+ MB binary like an HTML5 runtime (a.k.a. a web browser) and think about whether looking for holes and fixing them makes a lot of sense. And keep in mind that a web browser is not all security-critical assembly carefully handwritten by security professionals.

Conclusion

So stop looking for and fixing holes; it won’t make an impact. If hackers find one, then instead of researching how it could happen and educating the programmers responsible for it, construct a system that mitigates such attacks without the worker code having to be security-aware. Leave security to security code.

19 thoughts on “Leave security to security code. Or: Stop fixing bugs to make your software secure!”

  1. I cannot decide whether to consider this piece of writing intelligent insight or outright stupidity.

    On one hand, the correct approach is of course to prevent exploitable bugs from happening at the architecture level. This is an approach I’ve been talking about for years.

    On the other hand, the article makes it sound like DEP+ASLR is such a technology, instead of the more rational choice of using a programming language that prevents buffer overflows. The fundamental insight is that a combination of an arbitrary read vulnerability plus an arbitrary write vulnerability will beat DEP+ASLR every time. The only complication in exploitation is that the read and write vulnerabilities one tends to find are not arbitrary, but limited, and that makes it a sport.

    Please clarify. If you’re saying “with DEP and ASLR, auditing code for vulns doesn’t make sense”, you’re spreading dangerous idiocy.

  2. I do not agree. System-level security measures (memory protection, user accounts, filesystem/object access control lists, sandboxing, DEP, ASLR, all the things developed over the past 30 years) are necessary, of course. No modern operating system can be complete without them (except the embedded ones, of course), and they are the basis of security. But they are not enough.

    Firefox stores a lot of personal information. It caches content. It saves a history of browsing. It helps you refill forms by remembering values you entered earlier. It even allows you to save the passwords for different sites. Some of this information is sensitive (your bank or PayPal passwords, for example, or the cached view of the last charges made to your credit card). And it is managed by Firefox itself. Even if it were stored in a separate, more secure process (which would involve privilege elevation at launch and introduce usability problems), that data comes from Firefox and is read by Firefox. So a bug in Firefox could allow an attacker to retrieve, say, your PayPal account password, just as if the secure storage process didn’t exist.

    So, even with all that security infrastructure in place (which is absolutely necessary), writing secure code and fixing all known bugs, even those that are not security related, is a must. That there will always be bugs should not keep us from fighting them: what would happen if the police said “there will always be crime, so there’s no point in prosecuting it”?

    Also, guys, we are well into the 21st century, you know? But we are still writing application code in a language developed more than 40 years ago to make it easier to write the low-level code of a new OS kernel. A language with amazing control over memory and resources, but also one in which it is amazingly easy to slip up and commit buffer overflows or access unallocated or freed memory. Can’t we switch from C/C++ to safer languages, like C# or Java? I think it’s more than time.

  3. @Grijan
    Writing the low-level code of a kernel in languages like C or assembly is mostly done because this low-level code has to run directly on the processor, without any virtual machine layers in between, for the sake of execution speed. There are projects like Singularity (http://singularity.codeplex.com/) and Cosmos (http://cosmos.codeplex.com/) that are written in Sing# and C# respectively. But the compiled assemblies are translated into native code and don’t have to run on any VM, and even in these operating systems the very low-level code is still written in C.

  4. @Andreas: “User data cannot be trusted, use SSL!” 😉 No, “DEP+ASLR” is not such a technology, and I never said that, sorry for the confusion. DEP, ASLR, sandboxing, guard pages, canaries, partitioning security code away, layering and languages without pointers are all mechanisms that close certain exploitable holes. Please advise where I can change the wording to fix this.

    Also: I don’t really want to get into details of mitigation strategies here, but since you say that an r/w hole defeats ASLR – does this include ASLR plus guard pages on 64-bit? If you have a huge address space and no idea where things are, and a wrong guess will kill the process, I don’t see how this can still be exploited.

    @Grijan: In your reasoning, you have an operating system like Windows/Linux/Mac OS that implements the necessary security features, and then you have an application like Firefox. The problem is, Firefox is not an application. A music player or a calendar is an application, but Firefox is a runtime. Like any HTML5 runtime, it is closer to a 1990s operating system (Mac OS Classic, Amiga OS, Windows 3) than to an application. It is a monolithic blob that executes untrusted code and itself has access to all important resources like the user’s data and passwords. When you run a web browser, your operating system becomes something closer to a VMM and cannot provide the same security infrastructure it provides to regular applications. Being the effective (paravirtualized) operating system, the web browser needs to implement security features itself, for example by being split into pieces, moving the part that deals with user data into a separate process, and using UI to guard it against being used as an oracle. So yes, you are right, Firefox needs to deal with security. But applications should not.

    And yes, I absolutely agree with what you said about modern programming languages.

  5. I’m sorry, but some of this article is incredibly naive. If you’ve sandboxed an app so that it can only write to /tmp, a bug that causes the app to write to an arbitrarily named file has merely become more difficult to exploit, not impossible. You now just need to find an app with different privileges that can be made to read from some file in /tmp.

    And all code, not just security code, has the potential to contain security holes.

  6. @naehrwert:
    Yes, I understand OSes need a low-level language. Not only them, but also VMs and, in some cases, runtimes. AFAIK, even Singularity and .NET’s CLR have some low-level “glue” assembly code. But note that I was talking about *application* (or *user mode*) code.

    @Michael:
    Yes, you can look at Firefox (or any other browser) as a runtime. Right. Then, let’s look at Adobe Reader. It only does one thing, right? It’s just a document *viewer*. Not even an editor. A Viewer.

    But it turns out that in 2010 it was the application with the most vulnerabilities discovered and patched. I can’t remember the actual numbers, but the quantity was close to the total for the Windows OS (a whole OS!), and far greater than for Microsoft Office or Mozilla Firefox.

    And exploiting most of them was as easy as sending someone a specially crafted PDF. 99% of people use Adobe Reader to view PDFs, so, bam!, you get your code executed at the user’s privilege level (administrator in over half of the cases). Should Adobe deal with security? I would be really worried if it didn’t!

    The same goes for Microsoft Office, Open Office, Windows Media Player, WinAmp, Photoshop, The Gimp, etc. The list is too long. Any application able to open documents created on another computer can be subject to security vulnerabilities! And you must deal with them, because you just can’t tell your users not to open documents sent from other computers.

  7. @Russ:
    Not to mention that an application unable to access anything outside /tmp/ wouldn’t be able to do anything really useful – even a game needs access to a private directory to store saved games or user-made characters, levels or mods.

    The solution would be sandboxing and per-application (as opposed to per-user) security tokens, allowing an application to read just the application directory and the shared-libraries directory (or directories), and to write just to a private preferences directory, and maybe a subdirectory of the user’s documents folder. But even this could be too restrictive (have you seen a Java applet ask you for permission to read a file? It would be really annoying if Word and every other application did it!). And AFAIK no OS implements that.

    But even that wouldn’t fully protect the user: bugs would persist in the VM and OS code. Bugs that could be used to circumvent sandbox limitations. You can’t win.

  8. Well, Fefe is known for unqualified remarks regarding Apple products; he is just a hater who happens to have some followers. You can’t take him seriously.

  9. Taking Leitner seriously is really hard once you read the gibberish on his blog. This guy deserves to be ignored.

  10. A security bug is still a bug. Bugs need to be fixed. A security bug can at the same time affect the software’s stability, for example, or its usability, or something else.
    What you’re suggesting is basically NOT to strive for perfection.

    http://openbsd.org/security.html :

    We are not so much looking for security holes, as we are looking for basic software bugs, and if years later someone discovers the problem used to be a security issue, and we fixed it because it was just a bug, well, all the better.

  11. A bit inspired by the inconsistency blog: you give the example of finding 3 security-critical bugs in 512 bytes of code *carefully handwritten by security professionals*. But you still suggest that I should not fix my applications’ bugs for security’s sake, and instead trust that to systems written by these security professionals. 😛

    Fefe’s comment is overgeneralized and useless. Still, knowing about security, or at least knowing what you don’t know about it, is simply part of a software developer’s job. Or it should be. If it means fixing non-security bugs too, then even better. Write better code! It’s actually the part of the job you should be most proud of.

  12. I think that some of the comments are missing the point. The point here is that some of the effort spent closing up little security holes would be completely unnecessary if fundamental changes were made system-wide that close off any chance of making use of those holes. Comments such as “Adobe Reader bugs let someone send a malicious PDF that then runs exploit code as an administrator” illustrate that there will always be a need for security patching in general, but the “code runs at the Administrator access level” issue is a well-known “bug” that has been a fundamental flaw in the Windows OS for a very long time: it is extremely difficult to run as an unprivileged user for a variety of reasons (applications assume they always have admin rights, privilege-escalated installs not working in a way that lets the limited account use the software, etc.). If one cannot run needed applications as a limited user, even with the password for privilege escalation when required, then one will simply do what is necessary to run the desired software and permanently switch back to full administrative rights, bringing the very serious security issues right along with it.

    The point is that major work to change an aspect of a system’s security model requires more man-hours than patching one or two typical security holes in an application, but it can reap far greater rewards by eliminating an entire subcategory of exploit types, effectively mass-patching those holes without directly touching them in the first place. DEP is a perfect example: if a buffer overflow relies on the overflow area being executable, and it’s marked no-execute, then the buffer overflow will not successfully run arbitrary code as intended. Granted, this example exchanges arbitrary code execution for denial of service, but I think we can all agree that the latter is highly preferable to the former, and the same example again illustrates why patching individual software flaws remains necessary despite features like DEP and sandboxing: DEP prevents potential infection, but patching the flaw prevents the infection without DEP as well as the DEP-triggered crash.

    In the end, both approaches combined yield the best results; the issue then is finding a balance between patching thousands of tiny holes and implementing major revisions to the overall system.

  13. Re @Jody at Tritech Computer Solutions:

    A partial solution, at least for semi-advanced users running current/old Windows versions, would be a small “launcher” application which changes from administrator privileges to lower privileges. However, if that got widespread use, malware coders would start using the unsolvable timer/desktop exploit…

